
    Weighted ancestors in suffix trees

    The classical, ubiquitous predecessor problem is to construct a data structure for a set of integers that supports fast predecessor queries. Its generalization to weighted trees, a.k.a. the weighted ancestor problem, has been extensively explored and successfully reduced to the predecessor problem. It is known that any solution for both problems with an input set from a polynomially bounded universe that preprocesses a weighted tree in O(n polylog(n)) space requires Ω(log log n) query time. Perhaps the most important and frequent application of the weighted ancestors problem is for suffix trees. It has been a long-standing open question whether the weighted ancestors problem has better bounds for suffix trees. We answer this question positively: we show that a suffix tree built for a text w[1..n] can be preprocessed using O(n) extra space, so that queries can be answered in O(1) time. Thus we improve the running times of several applications. Our improvement is based on a number of data structure tools and a periodicity-based insight into the combinatorial structure of a suffix tree. Comment: 27 pages, LNCS format. A condensed version will appear in ESA 201
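
    To make the query interface concrete, the following is a minimal sketch, not the paper's O(1)-time structure: in a suffix tree the weight of a node is its string depth, and a weighted ancestor query asks, for a node and a target depth d, for the shallowest ancestor whose string depth is at least d (the locus of the corresponding prefix of the suffix). The class and field names below are illustrative assumptions; the naive walk up the tree costs O(depth) per query, which is exactly what the paper's result avoids.

```python
# Illustrative sketch of the weighted ancestor query interface on a suffix tree,
# answered naively by walking up parent pointers (not the paper's O(1) solution).
# Node, parent, and string_depth are assumed names for this sketch.

class Node:
    def __init__(self, string_depth, parent=None):
        self.string_depth = string_depth  # weight = length of the string spelled from the root
        self.parent = parent              # parent pointer; the root has parent None

def weighted_ancestor(v, d):
    """Return the shallowest ancestor of v whose string depth is >= d,
    i.e. the locus of the length-d prefix of the suffix whose leaf is v."""
    u = v
    while u.parent is not None and u.parent.string_depth >= d:
        u = u.parent
    return u

# Example: a root-to-leaf path with string depths 0 -> 2 -> 5 -> 9;
# a query for depth 4 lands on the node of depth 5.
root = Node(0)
a = Node(2, root)
b = Node(5, a)
leaf = Node(9, b)
assert weighted_ancestor(leaf, 4) is b
```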

    The Online House Numbering Problem: Min-Max Online List Labeling

    We introduce and study the online house numbering problem, where houses are added arbitrarily along a road and must be assigned labels to maintain their ordering along the road. The online house numbering problem is related to classic online list labeling problems, except that the optimization goal here is to minimize the maximum number of times that any house is relabeled. We provide several algorithms that achieve interesting tradeoffs between upper bounds on the maximum number of relabels per element and the number of bits used by labels.
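
    For readers unfamiliar with the setup, here is a minimal sketch of the problem (not one of the paper's algorithms): houses arrive in arbitrary road positions and receive integer labels that must stay in road order; when no free label exists between two neighbours, some existing houses must be relabeled, and the quantity the paper minimizes is the maximum number of relabels suffered by any single house. All names and the crude global-relabel rule below are illustrative assumptions.

```python
# A minimal sketch of the online house numbering setup (not the paper's algorithm).
# Labels must remain strictly increasing along the road; when a gap is missing,
# this crude sketch relabels everything evenly, which is far from min-max optimal.

LABEL_SPACE = 1 << 16  # size of the label universe (assumption for the sketch)

def insert_house(labels, pos):
    """labels: current labels in road order; insert a new house at index pos.
    Returns the number of existing houses that had to be relabeled."""
    left = labels[pos - 1] if pos > 0 else 0
    right = labels[pos] if pos < len(labels) else LABEL_SPACE
    if right - left > 1:
        labels.insert(pos, (left + right) // 2)   # a free label exists: no relabeling
        return 0
    # No free label between the neighbours: spread all labels evenly (global relabel)
    labels.insert(pos, 0)
    n = len(labels)
    for i in range(n):
        labels[i] = (i + 1) * LABEL_SPACE // (n + 1)
    return n - 1

labels = []
insert_house(labels, 0)   # first house
insert_house(labels, 1)   # a house after it
insert_house(labels, 1)   # a house squeezed in between
print(labels)             # labels remain strictly increasing along the road
```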

    Fully Dynamic Matching in Bipartite Graphs

    Maximum cardinality matching in bipartite graphs is an important and well-studied problem. The fully dynamic version, in which edges are inserted and deleted over time, has also been the subject of much attention. Existing algorithms for dynamic matching (in general graphs) seem to fall into two groups: there are fast (mostly randomized) algorithms that do not achieve a better-than-2 approximation, and there are slow algorithms with O(√m) update time that achieve a better-than-2 approximation. Thus the obvious question is whether we can design an algorithm -- deterministic or randomized -- that achieves a tradeoff between these two: an o(√m) update time and a better-than-2 approximation simultaneously. We answer this question in the affirmative for bipartite graphs. Our main result is a fully dynamic algorithm that maintains a (3/2 + ε)-approximation in worst-case update time O(m^{1/4} ε^{-2.5}). We also give stronger results for graphs whose arboricity is at most α, achieving a (1 + ε)-approximation in worst-case time O(α(α + log n)) for constant ε. When the arboricity is constant, this bound is O(log n), and when the arboricity is polylogarithmic the update time is also polylogarithmic. The most important technical development is the use of an intermediate graph we call an edge degree constrained subgraph (EDCS). This graph places constraints on the sum of the degrees of the endpoints of each edge: upper bounds for matched edges and lower bounds for unmatched edges. The main technical content of our paper involves showing both how to maintain an EDCS dynamically and that an EDCS always contains a sufficiently large matching. We also make use of graph orientations to help bound the amount of work done during each update. Comment: Longer version of paper that appears in ICALP 201
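
    The following is a hedged sketch of the EDCS constraints as described in the abstract, reading "matched" as edges kept in the subgraph H and "unmatched" as edges of G left out of H; the paper's exact parameter choices and its dynamic maintenance procedure are not reproduced here. The function simply checks the two degree-sum constraints on a static pair of edge sets; all names are illustrative.

```python
# Checks the edge degree constrained subgraph (EDCS) property sketched in the abstract:
# every edge kept in H has endpoint-degree sum (in H) at most beta, and every edge of G
# missing from H has endpoint-degree sum at least beta_minus. Static check only.

from collections import Counter

def is_edcs(G_edges, H_edges, beta, beta_minus):
    """G_edges, H_edges: iterables of (u, v) pairs, with H_edges a subset of G_edges."""
    H = set(frozenset(e) for e in H_edges)
    deg = Counter()
    for u, v in H_edges:
        deg[u] += 1
        deg[v] += 1
    for u, v in G_edges:
        s = deg[u] + deg[v]
        if frozenset((u, v)) in H:
            if s > beta:             # upper bound violated on an edge of H
                return False
        elif s < beta_minus:         # lower bound violated on an edge outside H
            return False
    return True

# Tiny example on the path a-b-c-d with H = {a-b, c-d}, beta = 2, beta_minus = 2.
G = [("a", "b"), ("b", "c"), ("c", "d")]
H = [("a", "b"), ("c", "d")]
print(is_edcs(G, H, beta=2, beta_minus=2))  # True: each H-edge has degree sum 2, and b-c has 1 + 1 = 2
```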

    Conditional Lower Bounds for Space/Time Tradeoffs

    In recent years much effort has been concentrated towards achieving polynomial time lower bounds on algorithms for solving various well-known problems. A useful technique for showing such lower bounds is to prove them conditionally based on well-studied hardness assumptions such as 3SUM, APSP, SETH, etc. This line of research helps to obtain a better understanding of the complexity inside P. A related question asks to prove conditional space lower bounds on data structures that are constructed to solve certain algorithmic tasks after an initial preprocessing stage. This question received little attention in previous research even though it has potentially strong impact. In this paper we address this question and show that, surprisingly, many of the well-studied hard problems that are known to have conditional polynomial time lower bounds are also hard with respect to space. This hardness is shown as a tradeoff between the space consumed by the data structure and the time needed to answer queries. The tradeoff may be either smooth or admit one or more singularity points. We reveal interesting connections between different space hardness conjectures and present matching upper bounds. We also apply these hardness conjectures to both static and dynamic problems and prove their conditional space hardness. We believe that this novel framework of polynomial space conjectures can play an important role in expressing polynomial space lower bounds for many important algorithmic problems. Moreover, it seems that it can also help in achieving a better understanding of the hardness of the corresponding problems in terms of time.

    Nonlocal mechanism for cluster synchronization in neural circuits

    The interplay between the topology of cortical circuits and synchronized activity modes in distinct cortical areas is a key enigma in neuroscience. We present a new nonlocal mechanism governing the periodic activity mode: the greatest common divisor (GCD) of network loops. For a stimulus to one node, the network splits into GCD-clusters in which cluster neurons are in zero-lag synchronization. For complex external stimuli, the number of clusters can be any common divisor. The synchronized mode and the transients to synchronization pinpoint the type of external stimuli. The findings, supported by an information mixing argument and simulations of Hodgkin-Huxley population dynamic networks with unidirectional connectivity and synaptic noise, call for reexamining sources of correlated activity in cortex and shorter information processing time scales. Comment: 8 pages, 6 figures
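
    A minimal arithmetic illustration of the abstract's central claim follows: for a stimulus at a single node, the number of zero-lag synchronized clusters equals the GCD of the lengths of the network's directed loops. Assigning neurons to clusters by firing delay modulo that GCD is a simplification used here purely for illustration, not the paper's simulation setup.

```python
# Illustrates the GCD-cluster count from the abstract: the number of zero-lag
# clusters under a single-node stimulus equals the GCD of the network loop lengths
# (loop lengths measured in units of a uniform synaptic delay, an assumption here).

from math import gcd
from functools import reduce

def num_clusters(loop_lengths):
    """GCD of all directed-loop lengths = number of zero-lag clusters (per the abstract)."""
    return reduce(gcd, loop_lengths)

print(num_clusters([6, 9, 12]))     # 3: the circuit splits into three synchronized clusters
print(num_clusters([6, 9, 12, 4]))  # 1: adding a length-4 loop collapses the clustering
```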

    Property matching and weighted matching

    In many pattern matching applications the text has some properties attached to its various parts. Pattern Matching with Properties (Property Matching, for short) involves a string matching between the pattern and the text, together with the requirement that the matched text part satisfies some property. Some immediate examples come from molecular biology, where it has long been a practice to consider special areas in the genome by their structures. It is straightforward to do sequential matching in a text with properties. However, indexing in a text with properties becomes difficult if we desire the time to be output dependent. We present an algorithm for indexing a text with properties in O(n log|Σ| + n log log n) time for preprocessing and O(|P| log|Σ| + tocc_π) per query, where n is the length of the text, P is the sought pattern, Σ is the alphabet, and tocc_π is the number of occurrences of the pattern that satisfy some property π. As a practical use of Property Matching we show how to solve Weighted Matching problems using techniques from Property Matching. Weighted sequences have recently been introduced as a tool to handle a set of sequences that are not identical but have many local similarities. The weighted sequence is a “statistical image” of this set, where we are given the probability of every symbol’s occurrence at every text location. Weighted matching problems are pattern matching problems where the given text is weighted. We present a reduction from Weighted Matching to Property Matching that allows off-the-shelf solutions to numerous weighted matching problems, including indexing, swapped matching, parameterized matching, approximate matching, and many more. Assuming that one seeks the occurrences of pattern P with probability ε in a weighted text T of length n, we reduce the problem to a property matching problem of pattern P in a text T′ of length O(n(1/ε)² log(1/ε)).
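
    To fix ideas, here is a naive baseline for Property Matching as defined in the abstract, not the paper's output-dependent index: report only those occurrences of the pattern whose entire span lies inside some property interval of the text. The interval representation and variable names below are illustrative assumptions.

```python
# Naive Property Matching baseline (not the indexed solution from the abstract):
# keep an occurrence of the pattern only if its whole span is contained in some
# property interval of the text.

def property_matches(text, pattern, properties):
    """properties: list of (start, end) index pairs, inclusive, marking text parts
    that satisfy the property pi. Returns start positions of qualifying occurrences."""
    m = len(pattern)
    hits = []
    for i in range(len(text) - m + 1):
        if text[i:i + m] != pattern:
            continue
        # keep the occurrence only if [i, i + m - 1] fits inside some property interval
        if any(s <= i and i + m - 1 <= e for s, e in properties):
            hits.append(i)
    return hits

text = "abcabcabc"
pattern = "abc"
properties = [(0, 5)]                               # only the first six characters carry the property
print(property_matches(text, pattern, properties))  # [0, 3] -- the occurrence at 6 is excluded
```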